extend nonzero to int64 #125850
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/125850
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 3 Unrelated Failures as of commit a46722a with merge base 8f30f36.
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Should we add a (presumably large tensor) test for this? Do we have an existing convention for such tests?
Unfortunately these are not really unified at the moment, but this should surface some examples: https://github.com/search?q=repo%3Apytorch%2Fpytorch+64bit+language%3APython+path%3A%2F%5Etest%5C%2F%2F&type=code
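For reference, a minimal sketch of such a large-tensor test, modeled on the patterns that search surfaces; the decorators are the existing ones from `torch.testing._internal`, while the memory budget, test placement, and exact assertions are assumptions:

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests,
    largeTensorTest,
    onlyCUDA,
)

class TestNonzeroInt64(TestCase):
    @onlyCUDA
    @largeTensorTest("8GB")  # assumption: enough for the input plus CUB temp storage
    def test_nonzero_large(self, device):
        # More than 2**31 - 1 elements, so 32-bit indexing would overflow.
        n = 2**31 + 3
        t = torch.zeros(n, dtype=torch.bool, device=device)
        t[-1] = True
        idx = t.nonzero()
        self.assertEqual(idx.numel(), 1)
        self.assertEqual(idx.item(), n - 1)

instantiate_device_type_tests(TestNonzeroInt64, globals(), only_for="cuda")

if __name__ == "__main__":
    run_tests()
```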
As we don't have a specific CUDA test, do we want to find a workaround from Python? Can you suggest one?
I think I am going to close this, as we probably need to wait upstream for NVIDIA/cccl#1422. What do you think?
@ezyang Do you think we can open a new ticket to lower this with Triton?
Edit:
using flag_iterator_t = cub::NullType*;
using equality_op_t = cub::NullType;

return cub::DispatchSelectIf<
Does this require cub/cccl 2.4.0?
Yes, need a big tensor test. @eqy's link is good for examples.
Ok, thanks.
Just to check if it could compile at least with the current CUB version:
/usr/local/cuda/include/cub/agent/agent_select_if.cuh(264): error: function "at::native::<unnamed>::NonZeroOp<T>::operator() [with T=c10::complex<c10::Half>]" cannot be called with the given argument list
    argument types are: (int64_t)
    object type is: at::native::<unnamed>::NonZeroOp<c10::complex<c10::Half>>
    selection_flags[ITEM] = select_op(items[ITEM]);
I think we need a newer CUB version. So this means that we need to wait for the next CUDA 12.4 update and also make it conditional.
This PR seems fine. I agree you may need to preprocessor your way to victory. CI will say.
If you're willing to wait for CUDA 12.5 :)
This version is required for the workaround API. A full upstream solution will require waiting for more CUDA releases.
Sorry, I'm not sure I understand the state of this PR. I would happily accept a PR that makes nonzero int64 work on CUDA 12.5 or later, subject to the requirement that this functionality is preprocessored out. If you don't mind waiting, we can also ice this PR until CUDA 12.6 shows up by default and then we can just land it as is.
The current status for testing/using the workaround in this PR is to have CUDA 12.5. For a "regular" upstream solution we need NVIDIA/cccl#1422 to be merged. And after that we need an official CUB release, and that release will have to be included in a CUDA release.
@atalman for CUDA 12.5
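Since the C++ path would be preprocessor-gated as discussed above, the Python test would need a matching runtime gate. A minimal sketch, assuming the CUDA version PyTorch was built against (from `torch.version.cuda`) is a reasonable proxy for the bundled CUB version; the 12.5 threshold follows this thread:

```python
import unittest

import torch

def _built_cuda_version():
    # torch.version.cuda is a string like "12.5", or None on CPU-only builds.
    if torch.version.cuda is None:
        return (0, 0)
    major, minor = torch.version.cuda.split(".")[:2]
    return (int(major), int(minor))

class TestNonzeroInt64Gated(unittest.TestCase):
    # The C++ path is compiled out on older toolkits, so the test must be
    # skipped there as well.
    @unittest.skipIf(
        _built_cuda_version() < (12, 5),
        "int64 nonzero needs the CUB that ships with CUDA >= 12.5",
    )
    def test_nonzero_large_gated(self):
        n = 2**31 + 3
        t = torch.zeros(n, dtype=torch.bool, device="cuda")
        t[-1] = True
        self.assertEqual(t.nonzero().item(), n - 1)
```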
Fixes #51871
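For context, #51871 tracks the INT_MAX limit that this PR lifts for CUDA. A minimal repro sketch; the exact error wording is an assumption and may vary by PyTorch version:

```python
import torch

# On builds without the int64 path, CUDA nonzero refuses inputs with more
# than INT_MAX elements.
t = torch.zeros(2**31, dtype=torch.bool, device="cuda")
t.nonzero()
# RuntimeError: nonzero is not supported for tensors with more than
# INT_MAX elements (assumed wording)
```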